fix: set wsl.useWindowsDriver when the nvidia-ctk is enabled #478
base: main
Conversation
(force-pushed from 02f4967 to 97fa52c)
I just tested this following NixOS/nixpkgs#312253 and can confirm it works fine with config.hardware.nvidia-container-toolkit.enable. However, I don't think this should be limited only to wsl.docker-desktop.enable, since Podman users, for example, may want this feature by default without having to install Docker on their system.
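A minimal configuration matching the Podman setup described above might look like the following sketch. The option names are the standard NixOS/NixOS-WSL ones; this is illustrative, not taken from the PR diff:

```nix
{ config, lib, pkgs, ... }:
{
  wsl.enable = true;

  # Generate CDI specifications so containers can access the GPU:
  hardware.nvidia-container-toolkit.enable = true;

  # Podman instead of Docker / Docker Desktop:
  virtualisation.podman.enable = true;
}
```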
(force-pushed from 97fa52c to 179a541)
Thanks a lot for checking @loicreynier! Good point, I have updated the PR to take that into account.
Looks good to me! Tested on a Podman setup and works as expected.
(force-pushed from 179a541 to db74ebe)
This improves the user experience: currently, even when the user enables the `config.hardware.nvidia-container-toolkit.enable` option, they cannot use their Nvidia GPUs within Docker containers because of missing libraries. This is fixed by setting `wsl.useWindowsDriver` explicitly when the user requests GPU support in Docker containers. Issue and fix provided by @qwqawawow. Related: nix-community#433 Related: NVIDIA/nvidia-container-toolkit#452
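Based on that description, the change presumably amounts to something like the following module sketch. The exact diff isn't shown here, and the use of `lib.mkDefault` (so users can still override the value) is an assumption:

```nix
{ config, lib, ... }:
{
  config = lib.mkIf config.hardware.nvidia-container-toolkit.enable {
    # Under WSL the Linux nvidia-x11 driver from nixpkgs does not work;
    # use the driver libraries mounted by Windows/WSL instead:
    wsl.useWindowsDriver = lib.mkDefault true;
  };
}
```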
(force-pushed from db74ebe to 5069440)
- Update neovim-nightly-flake
- Update docker used in NixOS-WSL to use v25 and nvidia-container-toolkit (nix-community/NixOS-WSL#478)
- Reorganize emacs-novelist (still WIP) as scaffold to analyse extraPackages further
Can this be force-pushed?
Excuse me, should the file nvidia-container-toolkit.json partly depend on NixOS's nvidia driver? Here is the relevant part:

```json
{
  "containerPath": "/usr/bin/nvidia-smi",
  "hostPath": "/nix/store/5s8mapl88avcqrggazg5kvm1y6b36a5p-nvidia-x11-555.58-6.6.36-bin/bin/nvidia-smi",
  "options": ["ro", "nosuid", "nodev", "bind"]
},
{
  "containerPath": "/usr/local/nvidia/lib",
  "hostPath": "/nix/store/qb18b5vz3rd6jch2klqfkr9zv9pxnj06-nvidia-x11-555.58-6.6.36/lib",
  "options": ["ro", "nosuid", "nodev", "bind"]
},
{
  "containerPath": "/usr/local/nvidia/lib64",
  "hostPath": "/nix/store/qb18b5vz3rd6jch2klqfkr9zv9pxnj06-nvidia-x11-555.58-6.6.36/lib",
  "options": ["ro", "nosuid", "nodev", "bind"]
}
```
Yes. This is expected.
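For context, with the fix from this PR applied the CDI spec should no longer point at a `/nix/store/...-nvidia-x11-...` path, because `wsl.useWindowsDriver` makes NixOS use the driver libraries that WSL mounts from Windows (typically under `/usr/lib/wsl/lib`). A sketch of the effective configuration; with this PR the second option is implied by the first:

```nix
{
  wsl.enable = true;
  hardware.nvidia-container-toolkit.enable = true;

  # Implied by the toolkit option once this PR lands; shown explicitly
  # here. The generated hostPath entries should then reference the
  # Windows-provided driver mount instead of a nix store nvidia-x11 path.
  wsl.useWindowsDriver = true;
}
```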
Note: I'm unsure if this is the right approach for this project. I am the maintainer of the CDI generation for NixOS, and we got a couple of reports that users could not use their GPUs inside Docker containers.
I cannot test it since I don't use NixOS-WSL, but I thought it would be good to improve the user experience here a bit if possible.
Thanks :)